Large numbers of labeled medical images are essential for accurate anomaly detection, but manual annotation is labor-intensive and time-consuming. Self-supervised learning (SSL) is a training method that learns data-specific features without manual annotation, and several SSL-based models have been applied to anomaly detection in medical images. These SSL methods effectively learn representations for certain image domains, such as natural and industrial product images. However, typical SSL-based models are inefficient for medical image anomaly detection because medical expertise is required. We propose an SSL-based model that performs anatomy-aware unsupervised anomaly detection (UAD). The model employs an anatomy-aware pasting (AnatPaste) augmentation tool. AnatPaste uses a threshold-based lung segmentation pretext task to create synthetic anomalies on normal chest radiographs for model pretraining. These anomalies resemble real ones and help the model learn to recognize them. We evaluate our model on three open-source chest radiograph datasets. It achieves areas under the curve (AUCs) of 92.1%, 78.7%, and 81.9%, the highest among existing UAD models. This is the first SSL model to use anatomical information in its pretext task. AnatPaste can be applied to various deep learning models and downstream tasks, and can be extended to other modalities by substituting an appropriate segmentation. Our code is publicly available at https://github.com/jun-sato/anatpaste.
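The abstract's core idea, pasting synthetic anomalies only inside a threshold-derived lung mask, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the threshold value, patch size, and blending weight are assumptions chosen for clarity.

```python
import numpy as np

def threshold_lung_mask(image, thresh=0.4):
    """Crude lung segmentation: lung fields are darker than surrounding
    tissue on a normalized radiograph, so keep pixels below a threshold.
    (The 0.4 cutoff is an illustrative assumption.)"""
    return image < thresh

def anatpaste_augment(image, rng, patch_size=16, alpha=0.5):
    """Create a synthetic anomaly by alpha-blending a randomly sourced
    patch at a random location inside the lung mask."""
    mask = threshold_lung_mask(image)
    ys, xs = np.nonzero(mask)
    # choose a random lung pixel as the paste centre
    i = rng.integers(len(ys))
    cy, cx = ys[i], xs[i]
    h, w = image.shape
    y0 = int(np.clip(cy - patch_size // 2, 0, h - patch_size))
    x0 = int(np.clip(cx - patch_size // 2, 0, w - patch_size))
    # source patch copied from another random location in the same image
    sy = rng.integers(0, h - patch_size)
    sx = rng.integers(0, w - patch_size)
    patch = image[sy:sy + patch_size, sx:sx + patch_size]
    out = image.copy()
    region = out[y0:y0 + patch_size, x0:x0 + patch_size]
    out[y0:y0 + patch_size, x0:x0 + patch_size] = (
        alpha * patch + (1 - alpha) * region
    )
    return out
```

A model pretrained to classify original versus augmented images then learns features sensitive to anatomy-localized irregularities, which is the pretext task the abstract describes.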
We introduce KiloGram, a resource for studying abstract visual reasoning in humans and machines. Drawing on the history of tangram puzzles as stimuli in cognitive science, we build a richly annotated dataset that, with >1k distinct stimuli, is orders of magnitude larger and more diverse than prior resources. It is both visually and linguistically richer, moving beyond whole shape descriptions to include segmentation maps and part labels. We use this resource to evaluate the abstract visual reasoning capacities of recent multi-modal models. We observe that pre-trained weights demonstrate limited abstract reasoning, which dramatically improves with fine-tuning. We also observe that explicitly describing parts aids abstract reasoning for both humans and models, especially when jointly encoding the linguistic and visual inputs. KiloGram is available at https://lil.nlp.cornell.edu/kilogram .
We present lilGym, a new benchmark for language-conditioned reinforcement learning in visual environments. lilGym is based on 2,661 highly-compositional human-written natural language statements grounded in an interactive visual environment. We annotate all statements with executable Python programs representing their meaning to enable exact reward computation in every possible world state. Each statement is paired with multiple start states and reward functions to form thousands of distinct Markov Decision Processes of varying difficulty. We experiment with lilGym with different models and learning regimes. Our results and analysis show that while existing methods are able to achieve non-trivial performance, lilGym forms a challenging open problem. lilGym is available at https://lil.nlp.cornell.edu/lilgym/.
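The reward mechanism described above, a natural-language statement annotated with an executable Python program over the world state, can be sketched like this. The statement, its meaning program, and the reward shape are hypothetical illustrations, not lilGym's actual annotation format or API.

```python
# Hypothetical sketch: a statement's meaning is an executable predicate
# over the world state, so reward can be computed exactly in any state.

def statement_program(state):
    """Hypothetical meaning program for the statement
    'there is exactly one yellow circle'."""
    return sum(1 for item in state if item == ("circle", "yellow")) == 1

def reward(state, program, done):
    """Exact reward: evaluate the statement's program at episode end."""
    if not done:
        return 0.0          # no intermediate reward in this sketch
    return 1.0 if program(state) else -1.0
```

Pairing one such program with many start states yields many distinct MDPs of varying difficulty, as the abstract notes.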